Web Survey Bibliography
Title Non-response in evaluation of teaching
Author Brinkmoeller, B.; Forthmann, B.; Thielsch, M.
Year 2016
Access date 29.04.2016
Presentation PDF (527KB)
Abstract
Relevance & Research Question
Student evaluations of teaching (SETs), especially web-based SETs, are becoming increasingly important for university teaching. High response rates are necessary to guarantee the validity of SETs and to avoid sampling errors, but they cannot always be reached. This study investigates factors that might explain non-response in SETs. We examined social exchange theory, salience, opportunity costs, survey fatigue, and survey mode (online vs. paper-and-pencil).
Methods and data
We contacted student representatives at 48 universities and additional student groups on social media, and sent approximately 900 invitations via the online panel PsyWeb. The web-based survey was available for about four months. Participants reported reasons for non-response, information about their response behavior, and their attitudes. A total of 490 participants (69.59% female; age: M = 24.10, SD = 4.07) were included in the final sample.
Results
Our results show a significant influence of social exchange on responding to SETs (ΔR² = .069, p < .001). Salience (ΔR² = .201, p < .001) and survey fatigue (ΔR² = .078, p < .001) also significantly influenced participation in SETs. No significant effects were found for opportunity costs (ΔR² = .005, p = .166) or for survey mode (ΔR² = .004, p = .209).
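The ΔR² values above are increments in explained variance from comparing nested regression models. As a minimal illustrative sketch only (not the authors' actual analysis; the data and variable names below are hypothetical), such an increment and its significance test can be computed in Python with statsmodels:

# Sketch: testing a Delta R^2 increment via nested OLS models.
# All data and variable names are hypothetical illustrations.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 490  # sample size reported in the abstract
df = pd.DataFrame({
    "participation": rng.normal(size=n),     # hypothetical outcome
    "social_exchange": rng.normal(size=n),   # hypothetical predictor
    "salience": rng.normal(size=n),          # hypothetical added predictor
})

# Baseline model vs. extended model adding the predictor of interest
base = smf.ols("participation ~ social_exchange", data=df).fit()
full = smf.ols("participation ~ social_exchange + salience", data=df).fit()

delta_r2 = full.rsquared - base.rsquared  # the Delta R^2 increment
f_val, p_val, df_diff = full.compare_f_test(base)  # nested-model F-test
print(f"Delta R^2 = {delta_r2:.3f}, F = {f_val:.2f}, p = {p_val:.4f}")

With real data, a significant F-test for the extended model corresponds to the significant ΔR² increments reported above.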
Added value
Our results can help online researchers and evaluation managers reduce non-response. Notably, our findings stress the importance of communication among students: a student's evaluation behavior is influenced by whether fellow students evaluate all of their lectures and courses. Universities should therefore indicate how many students are taking part in a current SET in order to motivate even more students. Furthermore, it is helpful to strengthen students' identification with their university and its SET, for example through special events or university games. In addition, the consequences of the SET should be made public so that students become aware of how they can influence the quality of teaching in their faculty. Finally, we found no evidence for the common assumption that online administration of questionnaires creates a non-response problem in SETs.
Access/Direct link Conference Homepage (presentation)
Year of publication 2016
Bibliographic type Conferences, workshops, tutorials, presentations
Web survey bibliography (4086)
- Displaying Videos in Web Surveys: Implications for Complete Viewing and Survey Responses; 2017; Mendelson, J.; Lee Gibson, J.; Romano Bergstrom, J. C.
- Using experts’ consensus (the Delphi method) to evaluate weighting techniques in web surveys not...; 2017; Toepoel, V.; Emerson, H.
- Mind the Mode: Differences in Paper vs. Web-Based Survey Modes Among Women With Cancer; 2017; Hagan, T. L.; Belcher, S. M.; Donovan, H. S.
- Answering Without Reading: IMCs and Strong Satisficing in Online Surveys; 2017; Anduiza, E.; Galais, C.
- Ideal and maximum length for a web survey; 2017; Revilla, M.; Ochoa, C.
- Social desirability bias in self-reported well-being measures: evidence from an online survey; 2017; Caputo, A.
- Web-Based Survey Methodology; 2017; Wright, K. B.
- Handbook of Research Methods in Health Social Sciences; 2017; Liamputtong, P.
- Lessons from recruitment to an internet based survey for Degenerative Cervical Myelopathy: merits of...; 2017; Davies, B.; Kotter, M. R.
- Web Survey Gamification - Increasing Data Quality in Web Surveys by Using Game Design Elements; 2017; Schacht, S.; Keusch, F.; Bergmann, N.; Morana, S.
- Effects of sampling procedure on data quality in a web survey; 2017; Rimac, I.; Ogresta, J.
- Comparability of web and telephone surveys for the measurement of subjective well-being; 2017; Sarracino, F.; Riillo, C. F. A.; Mikucka, M.
- Achieving Strong Privacy in Online Survey; 2017; Zhou, Yo.; Zhou, Yi.; Chen, S.; Wu, S. S.
- A Meta-Analysis of the Effects of Incentives on Response Rate in Online Survey Studies; 2017; Mohammad Asire, A.
- Telephone versus Online Survey Modes for Election Studies: Comparing Canadian Public Opinion and Vote...; 2017; Breton, C.; Cutler, F.; Lachance, S.; Mierke-Zatwarnicki, A.
- Examining Factors Impacting Online Survey Response Rates in Educational Research: Perceptions of Graduate...; 2017; Saleh, A.; Bista, K.
- Usability Testing for Survey Research; 2017; Geisen, E.; Romano Bergstrom, J. C.
- Paradata as an aide to questionnaire design: Improving quality and reducing burden; 2017; Timm, E.; Stewart, J.; Sidney, I.
- Fieldwork monitoring and managing with time-related paradata; 2017; Vandenplas, C.
- Interviewer effects on onliner and offliner participation in the German Internet Panel; 2017; Herzing, J. M. E.; Blom, A. G.; Meuleman, B.
- Interviewer Gender and Survey Responses: The Effects of Humanizing Cues Variations; 2017; Jablonski, W.; Krzewinska, A.; Grzeszkiewicz-Radulska, K.
- Millennials and emojis in Spain and Mexico.; 2017; Bosch Jover, O.; Revilla, M.
- Where, When, How and with What Do Panel Interviews Take Place and Is the Quality of Answers Affected...; 2017; Niebruegge, S.
- Comparing the same Questionnaire between five Online Panels: A Study of the Effect of Recruitment Strategy...; 2017; Schnell, R.; Panreck, L.
- Nonresponses as context-sensitive response behaviour of participants in online-surveys and their relevance...; 2017; Wetzlehuetter, D.
- Do distractions during web survey completion affect data quality? Findings from a laboratory experiment...; 2017; Wenz, A.
- Predicting Breakoffs in Web Surveys; 2017; Mittereder, F.; West, B. T.
- Measuring Subjective Health and Life Satisfaction with U.S. Hispanics; 2017; Lee, S.; Davis, R.
- Humanizing Cues in Internet Surveys: Investigating Respondent Cognitive Processes; 2017; Jablonski, W.; Grzeszkiewicz-Radulska, K.; Krzewinska, A.
- A Comparison of Emerging Pretesting Methods for Evaluating “Modern” Surveys; 2017; Geisen, E.; Murphy, J.
- The Effect of Respondent Commitment on Response Quality in Two Online Surveys; 2017; Cibelli Hibben, K.
- Pushing to web in the ISSP; 2017; Jonsdottir, G. A.; Dofradottir, A. G.; Einarsson, H. B.
- The 2016 Canadian Census: An Innovative Wave Collection Methodology to Maximize Self-Response and Internet...; 2017; Mathieu, P.
- Push2web or less is more? Experimental evidence from a mixed-mode population survey at the community...; 2017; Neumann, R.; Haeder, M.; Brust, O.; Dittrich, E.; von Hermanni, H.
- In search of best practices; 2017; Kappelhof, J. W. S.; Steijn, S.
- Redirected Inbound Call Sampling (RICS); A New Methodology ; 2017; Krotki, K.; Bobashev, G.; Levine, B.; Richards, S.
- An Empirical Process for Using Non-probability Survey for Inference; 2017; Tortora, R.; Iachan, R.
- The perils of non-probability sampling; 2017; Bethlehem, J.
- A Comparison of Two Nonprobability Samples with Probability Samples; 2017; Zack, E. S.; Kennedy, J. M.
- Rates, Delays, and Completeness of General Practitioners’ Responses to a Postal Versus Web-Based...; 2017; Sebo, P.; Maisonneuve, H.; Cerutti, B.; Pascal Fournier, J.; Haller, D. M.
- Necessary but Insufficient: Why Measurement Invariance Tests Need Online Probing as a Complementary...; 2017; Meitinger, K.
- Nonresponse in Organizational Surveying: Attitudinal Distribution Form and Conditional Response Probabilities...; 2017; Kulas, J. T.; Robinson, D. H.; Kellar, D. Z.; Smith, J. A.
- Theory and Practice in Nonprobability Surveys: Parallels between Causal Inference and Survey Inference...; 2017; Mercer, A. W.; Kreuter, F.; Keeter, S.; Stuart, E. A.
- Is There a Future for Surveys; 2017; Miller, P. V.
- Reducing speeding in web surveys by providing immediate feedback; 2017; Conrad, F.; Tourangeau, R.; Couper, M. P.; Zhang, C.
- Social Desirability and Undesirability Effects on Survey Response latencies; 2017; Andersen, H.; Mayerl, J.
- A Working Example of How to Use Artificial Intelligence To Automate and Transform Surveys Into Customer...; 2017; Neve, S.
- A Case Study on Evaluating the Relevance of Some Rules for Writing Requirements through an Online Survey...; 2017; Warnier, M.; Condamines, A.
- Estimating the Impact of Measurement Differences Introduced by Efforts to Reach a Balanced Response...; 2017; Kappelhof, J. W. S.; De Leeuw, E. D.
- Targeted letters: Effects on sample composition and item non-response; 2017; Bianchi, A.; Biffignandi, S.